45 research outputs found

    Inductive Relation Prediction from Relational Paths and Context with Hierarchical Transformers

    Relation prediction on knowledge graphs (KGs) is a key research topic. Dominant embedding-based methods mainly focus on the transductive setting and lack the inductive ability to generalize to new entities at inference time. Existing methods for inductive reasoning mostly mine the connections between entities, i.e., relational paths, without considering the nature of the head and tail entities contained in the relational context. This paper proposes a novel method that captures both the connections between entities and the intrinsic nature of entities, by simultaneously aggregating RElational Paths and cOntext with a unified hieRarchical Transformer framework, namely REPORT. REPORT relies solely on relation semantics and can naturally generalize to the fully-inductive setting, where the KGs used for training and inference share no entities. In experiments, REPORT performs consistently better than all baselines on almost all eight version subsets of two fully-inductive datasets. Moreover, REPORT is interpretable, as it provides each element's contribution to the prediction results. Comment: Accepted by ICASSP 2023 (Oral).
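    REPORT's entity-independence comes from describing a query purely through relation labels. As a rough, hypothetical illustration (not the paper's implementation), one can enumerate the entity-free relation paths connecting a head and a tail in a toy KG; because entity identities never enter the representation, the same paths transfer to graphs whose entities were never seen in training:

```python
from collections import defaultdict

def relation_paths(triples, head, tail, max_len=3):
    """Enumerate relation sequences (entity-free paths) from head to tail.
    Keeping only relation labels makes the representation independent of
    entity identity, so it applies unchanged in the fully-inductive setting."""
    adj = defaultdict(list)
    for h, r, t in triples:
        adj[h].append((r, t))
    paths, frontier = set(), [(head, ())]
    while frontier:
        node, rels = frontier.pop()
        if len(rels) >= max_len:  # bound path length (also guards cycles)
            continue
        for r, nxt in adj[node]:
            seq = rels + (r,)
            if nxt == tail:
                paths.add(seq)
            frontier.append((nxt, seq))
    return paths

# Toy KG: paths a -> c are ("citizen_of",) and ("born_in", "city_of")
toy = [("a", "born_in", "b"), ("b", "city_of", "c"), ("a", "citizen_of", "c")]
paths = relation_paths(toy, "a", "c")
```

    The actual model feeds such paths, together with the relational context of the head and tail, into a hierarchical Transformer; this sketch only shows why no entity embeddings are needed.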

    Not All Image Regions Matter: Masked Vector Quantization for Autoregressive Image Generation

    Existing autoregressive models follow a two-stage generation paradigm that first learns a codebook in the latent space for image reconstruction and then completes image generation autoregressively based on the learned codebook. However, existing codebook learning simply models all local region information of images without distinguishing their different perceptual importance, which introduces redundancy into the learned codebook that not only limits the next-stage autoregressive model's ability to model important structure but also results in high training cost and slow generation speed. In this study, we borrow the idea of importance perception from classical image coding theory and propose a novel two-stage framework, consisting of Masked Quantization VAE (MQ-VAE) and Stackformer, to relieve the model from modeling redundancy. Specifically, MQ-VAE incorporates an adaptive mask module for masking redundant region features before quantization and an adaptive de-mask module for recovering the original grid image feature map, so as to faithfully reconstruct the original images after quantization. Stackformer then learns to predict the combination of the next code and its position in the feature map. Comprehensive experiments on various image generation tasks validate the effectiveness and efficiency of our approach. Code will be released at https://github.com/CrossmodalGroup/MaskedVectorQuantization. Comment: Accepted by CVPR 2023.
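    A minimal, hypothetical sketch of the two ideas combined here — importance-based masking followed by quantization into (codeword, position) pairs. The real model learns the scores, the mask, and the codebook end to end; everything below (names, shapes, values) is illustrative only:

```python
def nearest_code(vec, codebook):
    """Index of the codebook entry closest to vec (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((v - c) ** 2 for v, c in zip(vec, codebook[i])))

def masked_quantize(features, scores, keep, codebook):
    """Drop all but the `keep` highest-scoring region features, then map each
    kept feature to a (codeword index, position) pair -- the kind of sequence
    a Stackformer-style model would predict autoregressively."""
    kept = sorted(range(len(features)), key=lambda i: -scores[i])[:keep]
    return [(nearest_code(features[i], codebook), i) for i in sorted(kept)]

# Four region features, two of which are deemed perceptually important
feats = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5], [0.0, 0.0]]
tokens = masked_quantize(feats, scores=[0.9, 0.8, 0.1, 0.05], keep=2,
                         codebook=[[1.0, 0.0], [0.0, 1.0]])
```

    Predicting the position alongside the code is what lets the decoder know where each kept token belongs when the de-mask module rebuilds the full grid.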

    Image Captioning with Context-Aware Auxiliary Guidance

    Image captioning is a challenging computer vision task that aims to generate a natural language description of an image. Most recent research follows the encoder-decoder framework, which depends heavily on previously generated words for the current prediction. Such methods cannot effectively exploit future predicted information to learn complete semantics. In this paper, we propose a Context-Aware Auxiliary Guidance (CAAG) mechanism that can guide the captioning model to perceive global contexts. On top of the captioning model, CAAG performs semantic attention that selectively concentrates on useful information in the global predictions to reproduce the current generation. To validate the adaptability of the method, we apply CAAG to three popular captioners, and our proposal achieves competitive performance on the challenging Microsoft COCO image captioning benchmark, e.g., a 132.2 CIDEr-D score on the Karpathy split and a 130.7 CIDEr-D (c40) score on the official online evaluation server.
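    In bare-bones form, "semantic attention over global predictions" could look like ordinary dot-product attention from the current decoding state over embeddings of the globally predicted words; the function below is a hypothetical sketch, not the authors' code:

```python
import math

def semantic_attention(query, global_preds):
    """Attend from the current decoding state (`query`) over embeddings of
    globally predicted words, returning the context vector that would guide
    the current generation step (CAAG-style)."""
    logits = [sum(q * g for q, g in zip(query, vec)) for vec in global_preds]
    m = max(logits)                      # stabilize softmax
    w = [math.exp(l - m) for l in logits]
    z = sum(w)
    w = [x / z for x in w]
    dim = len(query)
    return [sum(w[k] * global_preds[k][d] for k in range(len(global_preds)))
            for d in range(dim)]

# The state attends most to the global prediction it aligns with
ctx = semantic_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```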

    Random Entity Quantization for Parameter-Efficient Compositional Knowledge Graph Representation

    Representation learning on Knowledge Graphs (KGs) is essential for downstream tasks. The dominant approach, KG Embedding (KGE), represents entities with independent vectors and faces a scalability challenge. Recent studies propose an alternative for parameter efficiency: representing entities by composing entity-corresponding codewords matched from predefined small-scale codebooks. We refer to the process of obtaining the corresponding codewords of each entity as entity quantization, for which previous works have designed complicated strategies. Surprisingly, this paper shows that simple random entity quantization can achieve results similar to those of current strategies. We analyze this phenomenon and reveal that entity codes, the quantization outcomes for expressing entities, have higher entropy at the code level and higher Jaccard distance at the codeword level under random entity quantization. Therefore, different entities become more easily distinguishable, facilitating effective KG representation. These results show that current quantization strategies are not critical for KG representation, and that there is still room for improvement in entity distinguishability beyond current strategies. The code to reproduce our results is available at https://github.com/JiaangL/RandomQuantization. Comment: Accepted to EMNLP 2023.
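    The core idea is simple enough to sketch in a few lines of pure Python (all sizes below are hypothetical; the paper composes learned codewords inside a KGE model): assign each entity a random subset of codeword indices, compose its embedding from those codewords, and observe that random codes are already far apart in Jaccard distance:

```python
import random

def random_entity_quantization(num_entities, codebook_size, codes_per_entity, seed=0):
    """Assign each entity a random set of codeword indices (its 'entity code')."""
    rng = random.Random(seed)
    return [frozenset(rng.sample(range(codebook_size), codes_per_entity))
            for _ in range(num_entities)]

def compose_embedding(code, codebook):
    """Represent an entity as the mean of its matched codewords."""
    dim = len(codebook[0])
    vec = [0.0] * dim
    for idx in code:
        for d in range(dim):
            vec[d] += codebook[idx][d]
    return [v / len(code) for v in vec]

def jaccard_distance(a, b):
    """1 - |A n B| / |A u B|; higher means more distinguishable entity codes."""
    return 1.0 - len(a & b) / len(a | b)

codes = random_entity_quantization(num_entities=1000, codebook_size=1000,
                                   codes_per_entity=8)
rng = random.Random(1)
pairs = [(rng.randrange(1000), rng.randrange(1000)) for _ in range(200)]
avg_dist = sum(jaccard_distance(codes[i], codes[j]) for i, j in pairs) / len(pairs)
```

    With a codebook much larger than the per-entity code size, two random codes rarely overlap, so the average pairwise Jaccard distance sits close to 1 — the distinguishability property the paper attributes to random quantization.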

    Entity Structure Within and Throughout: Modeling Mention Dependencies for Document-Level Relation Extraction

    Entities, as the essential elements in relation extraction tasks, exhibit a certain structure. In this work, we formulate such structure as distinctive dependencies between mention pairs. We then propose SSAN, which incorporates these structural dependencies within the standard self-attention mechanism and throughout the overall encoding stage. Specifically, we design two alternative transformation modules inside each self-attention building block to produce attentive biases that adaptively regularize the attention flow. Our experiments demonstrate the usefulness of the proposed entity structure and the effectiveness of SSAN. It significantly outperforms competitive baselines, achieving new state-of-the-art results on three popular document-level relation extraction datasets. We further provide ablations and visualizations to show how the entity structure guides the model toward better relation extraction. Our code is publicly available. Comment: Accepted to AAAI 2021.
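    The mechanism reduces to adding a structure-dependent bias to each attention logit before the softmax. A simplified sketch (a scalar bias per dependency type; SSAN's actual transformation modules are learned and conditioned on the query/key states):

```python
import math

def attention_with_structural_bias(scores, dep_types, bias_per_type):
    """Add a bias to each raw attention score according to the structural
    dependency between positions i and j, then softmax each row."""
    out = []
    for i, row in enumerate(scores):
        logits = [s + bias_per_type[dep_types[i][j]] for j, s in enumerate(row)]
        m = max(logits)                          # stabilize softmax
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

# Two tokens; mentions of the same entity ("intra") get a positive bias
raw = [[0.0, 0.0], [0.0, 0.0]]
deps = [["intra", "inter"], ["inter", "intra"]]
attn = attention_with_structural_bias(raw, deps, {"intra": 1.0, "inter": 0.0})
```

    Even with identical raw scores, the bias steers attention toward structurally related mentions, which is the intended regularization of the attention flow.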

    A Bi2Te3-Filled Nickel Foam Film with Exceptional Flexibility and Thermoelectric Performance

    The past decades have witnessed surging demand for wearable electronics, for which thermoelectrics (TEs) are considered a promising self-charging technology, as they are capable of converting skin heat directly into electricity. Bi2Te3 is the most widely used TE material at room temperature, owing to its high zT of ~1. However, it is difficult to integrate Bi2Te3 into wearable TEs owing to its intrinsic rigidity. Bi2Te3 can be made flexible when thin enough, but this implies a small electrical and thermal load, severely restricting the power output. Herein, we developed a Bi2Te3/nickel foam (NiFoam) composite film through solvothermal deposition of Bi2Te3 nanoplates into porous NiFoam. Due to the mesh structure and ductility of NiFoam, the film, with a thickness of 160 μm, exhibited a high figure of merit for flexibility, 0.016, implying a higher achievable output. The film also showed a high tensile strength of 12.7 ± 0.04 MPa and a maximum elongation of 28.8%. In addition, owing to the film's high electrical conductivity and enhanced Seebeck coefficient, an outstanding power factor of 850 μW m−1 K−2 was achieved, which is among the highest ever reported. A module fabricated from five such n-type legs, connected electrically in series and thermally in parallel, produced an output power of 22.8 nW at a temperature difference of 30 K. This work offers a cost-effective avenue for making highly flexible TE films to power wearable electronics by intercalating TE nanoplates into porous, mesh-structured materials.
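    For context, the power factor quoted above is defined as PF = S²σ, where S is the Seebeck coefficient and σ the electrical conductivity. The values of S and σ below are hypothetical, chosen only to show that plausible room-temperature numbers reproduce the reported order of magnitude; the abstract does not state them individually:

```python
def power_factor(seebeck_v_per_k, conductivity_s_per_m):
    """Thermoelectric power factor PF = S^2 * sigma, in W m^-1 K^-2."""
    return seebeck_v_per_k ** 2 * conductivity_s_per_m

# Hypothetical inputs: S = -130 uV/K, sigma = 5.03e4 S/m
pf = power_factor(-130e-6, 5.03e4)   # ~8.5e-4 W m^-1 K^-2, i.e. ~850 uW m^-1 K^-2
```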

    Vitamin D and cause-specific vascular disease and mortality: a Mendelian randomisation study involving 99,012 Chinese and 106,911 European adults
